
Neural Information Processing Systems

We show that there are monotone data sets that cannot be interpolated by a monotone network of depth 2. On the other hand, we prove that for every monotone data set with $n$ points in $\mathbb{R}^d$, there exists an interpolating monotone network of depth 4 and size $O(nd)$. Our interpolation result implies that every monotone function over $[0,1]^d$ can be approximated arbitrarily well by a depth-4 monotone network, improving on the previous best-known construction of depth $d+1$.
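For intuition, here is a minimal sketch of the object these results concern, assuming the standard definition (not restated in the abstract) of a monotone network as a feedforward network with nonnegative weights and monotone activations; the depth-2 architecture and random weights below are illustrative only:

```python
import numpy as np

rng = np.random.default_rng(0)

# A toy "monotone network": feedforward ReLU layers whose weights are
# all nonnegative.  Nonnegative weights plus monotone activations give
# a monotone function: x <= y coordinatewise implies f(x) <= f(y).
W1 = rng.uniform(0, 1, size=(3, 2))   # hidden layer, nonnegative weights
b1 = rng.uniform(-1, 1, size=3)       # biases may have either sign
W2 = rng.uniform(0, 1, size=(1, 3))   # output layer, nonnegative weights

def f(x):
    h = np.maximum(W1 @ x + b1, 0.0)  # ReLU is monotone
    return (W2 @ h).item()

# Empirical check of the defining property on comparable pairs x <= y:
# increasing any coordinate never decreases the output.
for _ in range(1000):
    x = rng.uniform(0, 1, size=2)
    y = x + rng.uniform(0, 1, size=2)  # y dominates x coordinatewise
    assert f(x) <= f(y) + 1e-12
print("monotonicity holds on all sampled comparable pairs")
```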



Nearest Neighbor Representations of Neural Circuits

Kilic, Kordag Mehmet, Sima, Jin, Bruck, Jehoshua

arXiv.org Artificial Intelligence

Neural networks successfully capture the computational power of the human brain for many tasks. Nearest Neighbor (NN) representations are a novel model of computation that is likewise inspired by the architecture of the brain. We establish a firmer correspondence between NN representations and neural networks. Although it was known how to represent a single neuron as an NN representation, there were no results even for small-depth neural networks. Specifically, for depth-2 threshold circuits, we provide explicit constructions of their NN representations with an explicit bound on the number of bits needed to represent them. Example functions include NN representations of convex polytopes (ANDs of threshold gates), IP2, ORs of threshold gates, and linear or exact decision lists.
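Concretely, an NN representation of a Boolean function is a set of labeled real-valued anchor points such that, for every Boolean input, the label of the nearest anchor equals the function value. A minimal sketch with anchors of our own choosing (not a construction from the paper): two anchors suffice for AND of two variables.

```python
import itertools
import numpy as np

# An NN representation of a Boolean function f: a list of labeled
# "anchor" points such that, for every 0/1 input x, the label of the
# anchor nearest to x equals f(x).
anchors = np.array([[1.0, 1.0],    # label 1: nearest to (1,1) only
                    [0.5, 0.5]])   # label 0: nearest to the other inputs
labels = np.array([1, 0])

def nn_eval(x):
    dists = np.linalg.norm(anchors - x, axis=1)
    return labels[np.argmin(dists)]

# Verify the representation computes AND(x1, x2) on all Boolean inputs.
for x in itertools.product([0, 1], repeat=2):
    assert nn_eval(np.array(x, dtype=float)) == (x[0] and x[1])
print("2 anchors represent AND of 2 variables")
```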


Exponential Lower Bounds for Threshold Circuits of Sub-Linear Depth and Energy

Uchizawa, Kei, Abe, Haruki

arXiv.org Artificial Intelligence

In this paper, we investigate the computational power of threshold circuits and other theoretical models of neural networks in terms of the following four complexity measures: size (the number of gates), depth, weight and energy. Here the energy complexity of a circuit measures the sparsity of its computation, and is defined as the maximum number of gates outputting non-zero values, taken over all input assignments. As our main result, we prove that any threshold circuit $C$ of size $s$, depth $d$, energy $e$ and weight $w$ satisfies $\log (rk(M_C)) \le ed (\log s + \log w + \log n)$, where $rk(M_C)$ is the rank of the communication matrix $M_C$ of the $2n$-variable Boolean function that $C$ computes. Thus, such a threshold circuit $C$ can compute only Boolean functions whose communication matrices have rank bounded by a product of logarithmic factors of $s,w$ and linear factors of $d,e$. This implies an exponential lower bound on the size of even a sublinear-depth threshold circuit if the energy and weight are sufficiently small. For other models of neural networks, such as discretized ReLU circuits and discretized sigmoid circuits, we prove that a similar inequality also holds for a discretized circuit $C$: $\log (rk(M_C)) = O(ed(\log s + \log w + \log n)^3)$.
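To make the energy measure concrete, here is a toy instance of our own (not from the paper): a depth-2 threshold circuit for XOR whose energy, the maximum number of firing gates over all inputs, is 2 out of its 3 gates.

```python
import itertools
import numpy as np

def gate(w, b, x):
    """A threshold gate: outputs 1 iff w . x >= b."""
    return int(np.dot(w, x) >= b)

# Toy depth-2 threshold circuit computing XOR(x1, x2):
#   g1 fires iff x1 + x2 >= 1, g2 fires iff x1 + x2 >= 2,
#   output fires iff g1 - g2 >= 1.
def circuit_gates(x):
    g1 = gate([1, 1], 1, x)
    g2 = gate([1, 1], 2, x)
    out = gate([1, -1], 1, [g1, g2])
    return [g1, g2, out]

# Energy = max over all inputs of the number of gates outputting 1.
energy = max(sum(circuit_gates(x))
             for x in itertools.product([0, 1], repeat=2))
print("energy of the toy circuit:", energy)  # 2: at most g1 and out fire
```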


On Neuronal Capacity

Baldi, Pierre, Vershynin, Roman

Neural Information Processing Systems

We define the capacity of a learning machine to be the logarithm of the number (or volume) of the functions it can implement. We review known results and derive new ones, estimating the capacity of several neuronal models: linear and polynomial threshold gates, linear and polynomial threshold gates with constrained weights (binary weights, positive weights), and ReLU neurons. We also derive capacity estimates and bounds for fully recurrent networks as well as for feedforward networks.
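For a feel of the measure (our own illustration, not a computation from the paper): a linear threshold gate on two Boolean inputs implements 14 of the 16 Boolean functions, every one except XOR and XNOR, so its capacity is log2(14), roughly 3.8 bits. A brute-force sweep over a weight grid confirms the count:

```python
import itertools
import math

# Count distinct Boolean functions of 2 variables implementable by a
# linear threshold gate [w1*x1 + w2*x2 + b >= 0]; capacity is the log
# of this count.  A coarse weight grid suffices at this scale.
inputs = list(itertools.product([0, 1], repeat=2))
grid = [x / 2 for x in range(-6, 7)]  # weights/bias in {-3, -2.5, ..., 3}

functions = set()
for w1, w2, b in itertools.product(grid, repeat=3):
    table = tuple(int(w1 * x1 + w2 * x2 + b >= 0) for x1, x2 in inputs)
    functions.add(table)

print(len(functions), "threshold functions;",
      "capacity = %.2f bits" % math.log2(len(functions)))
# Expected: 14 functions (all but XOR/XNOR), capacity ~ 3.81 bits
```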


Neural Computation with Winner-Take-All as the Only Nonlinear Operation

Maass, Wolfgang

Neural Information Processing Systems

Everybody "knows" that neural networks need more than a single layer of nonlinear units to compute interesting functions. We show that this is false if one employs winner-take-all as nonlinear unit: - Any boolean function can be computed by a single k-winner-takeall unit applied to weighted sums of the input variables.

